Week 8: Expectation Maximization
Abstract
So far, we have discussed clustering algorithms that make a hard assignment of each datapoint to a single cluster, typically based on its proximity to the other points in that cluster. Simply assigning points to the nearest cluster, however, is not always adequate to capture more complex structure. For example, the lecture slides show a case where one cluster is much larger and less dense than another. We will discuss a probabilistic clustering method that can capture this by introducing two new ingredients: a soft assignment of each point x_i to every cluster, and additional parameters describing the shape of each cluster, so as to capture the fact that clusters can be large, small, sparse, or dense.
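To make this concrete, here is a minimal sketch (not from the lecture notes) of the soft-assignment step in a Gaussian mixture: each cluster k is given a weight π_k, a mean μ_k, and a covariance Σ_k, and every point x_i receives a posterior probability of belonging to each cluster. The covariance is the extra shape parameter that lets a cluster be large, small, sparse, or dense. The function name `responsibilities` and all numbers below are invented for illustration; the sketch assumes NumPy and SciPy.

    # A minimal sketch of soft assignment in a Gaussian mixture.
    import numpy as np
    from scipy.stats import multivariate_normal

    def responsibilities(X, pis, mus, Sigmas):
        # E-step: posterior p(y_i = k | x_i) for each point i and cluster k,
        # proportional to pi_k * N(x_i; mu_k, Sigma_k).
        dens = np.column_stack([
            pi * multivariate_normal.pdf(X, mean=mu, cov=Sigma)
            for pi, mu, Sigma in zip(pis, mus, Sigmas)
        ])
        return dens / dens.sum(axis=1, keepdims=True)

    # Two toy clusters: one small and dense, one large and diffuse.
    pis = [0.5, 0.5]
    mus = [np.zeros(2), np.array([4.0, 0.0])]
    Sigmas = [0.2 * np.eye(2), 3.0 * np.eye(2)]
    X = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
    print(responsibilities(X, pis, mus, Sigmas))  # each row sums to 1

Note that the point at (2, 0) is equidistant from both means, yet its soft assignment goes almost entirely to the large, diffuse cluster; this is exactly the kind of structure a hard nearest-centroid rule cannot express.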
Similar resources
Week 9: Expectation Maximization
Last week, we saw how to represent clustering with a probabilistic model. In this model, called a Gaussian mixture model, we model each datapoint x_i as originating from some cluster, with a corresponding cluster label y_i distributed according to p(y), and the distribution for cluster k given by a multivariate Gaussian: p(x | y = k) = N(x; μ_k, Σ_k). (A small sketch of this generative story appears after this list.)
The basic idea behind Expectation-Maximization
Contents: 3. The Expectation-Maximization algorithm — 3.1 Jointly non-concave incomplete log-likelihood; 3.2 (Possibly) concave complete-data log-likelihood; 3.3 The general EM derivation; 3.4 The E- and M-steps; 3.5 The EM algorithm; ...
The basic idea of Expectation-Maximization
Blind source separation with time series variational Bayes expectation maximization algorithm
Available online 8 October 2010.
The Basic Idea of EM
Contents: 4. The Expectation-Maximization algorithm — 4.1 Jointly non-concave incomplete log-likelihood; 4.2 (Possibly) concave complete-data log-likelihood; 4.3 The general EM derivation; 4.4 The E- and M-steps; 4.5 The EM algorithm; ...
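As an aside, the Gaussian mixture described in the "Week 9" entry above has a simple generative reading: draw a cluster label y_i from p(y), then draw x_i from the corresponding Gaussian. The sketch below illustrates this under invented parameters; it is not code from any of the listed resources and assumes only NumPy.

    # A sketch of the generative story: y_i ~ p(y), then x_i ~ N(mu_{y_i}, Sigma_{y_i}).
    import numpy as np

    rng = np.random.default_rng(0)
    p_y = np.array([0.7, 0.3])                        # cluster prior p(y)
    mus = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
    Sigmas = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]

    n = 5
    ys = rng.choice(len(p_y), size=n, p=p_y)          # cluster labels y_i
    xs = np.stack([rng.multivariate_normal(mus[y], Sigmas[y]) for y in ys])
    print(ys)
    print(xs)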
Publication date: 2016